Studying multiplicity: an interview with Prakhar Ganesh
In this interview series, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. We sat down with Prakhar Ganesh to learn about his work on responsible AI, which is focussed on the concept of multiplicity. We found out more about some of the projects he's been involved in, his future plans, and how he got into the field.

Could you start with a quick introduction to yourself, where you're studying, and the broad topic of your research?

My name is Prakhar Ganesh. I'm also affiliated with Mila, which is a research institute in Montreal. My supervisor is Professor Golnoosh Farnadi.
Appendix for Data Diversification: A Simple Strategy For Neural Machine Translation Xuan-Phi Nguyen
Finally, we describe the training setup for our back-translation experiments. We continue to differentiate our method from other existing works: our method does not train multiple peer models with EM training either. In each round, a forward (or backward) model takes its turn playing the "back-translation" role to generate training data; the role is switched in the next round. In other words, source and target are identical.
- Oceania > Australia > Victoria > Melbourne (0.04)
- North America > Canada (0.04)
- Europe > Germany > Berlin (0.04)
- (4 more...)
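A minimal sketch of the round-based role switching described above, under assumed interfaces (`diversify`, `forward_model`, and `backward_model` are placeholders for illustration, not the paper's code):

```python
def diversify(parallel_data, forward_model, backward_model, rounds=2):
    """Round-based back-translation: in each round one model plays the
    "back-translation" role to generate synthetic pairs for the other,
    and the roles switch in the next round (illustrative sketch)."""
    augmented = list(parallel_data)
    for r in range(rounds):
        if r % 2 == 0:
            # Backward model back-translates targets into synthetic sources.
            synth = [(backward_model(tgt), tgt) for _, tgt in parallel_data]
        else:
            # Forward model translates sources into synthetic targets.
            synth = [(src, forward_model(src)) for src, _ in parallel_data]
        augmented.extend(synth)
    return augmented
```

The original parallel pairs are kept, and each round appends one synthetic copy of the corpus generated by whichever model currently holds the back-translation role.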
Why Alignment Must Precede Distillation: A Minimal Working Explanation
For efficiency, preference alignment is often performed on compact, knowledge-distilled (KD) models. We argue this common practice introduces a significant limitation by overlooking a key property of the alignment's reference model: its distributional recall. We show that the standard KD→Align workflow diminishes the model's capacity to align rare yet desirable behaviors, even under strong preference signals. We instead demonstrate that reversing the pipeline (i.e., Align→KD) is essential: alignment must first be performed on a high-recall reference before distillation. First, we provide a minimal working explanation of how the reference model constrains preference alignment objectives at a fundamental level. Second, we validate this theory in a controllable Mixture-of-Gaussians experiment, where low-recall anchoring consistently results in suboptimal model performance. Finally, we demonstrate that the same phenomenon holds in LLM alignment with the SmolLM2 family: models aligned after KD fail to effectively align target behaviors, resulting in substantially lower reward and target precision. In contrast, our proposed Align→KD pipeline robustly aligns these behaviors, yielding models with superior target-oriented metrics and lower variance. Together, these results establish reference-model recall as a first-order design choice in alignment, offering a clear principle: alignment must precede distillation.

The alignment of large language models (LLMs) with human preferences has emerged as a central challenge in modern AI research. Building on pretrained models with vast general knowledge, algorithms such as Reinforcement Learning from Human Feedback (RLHF; Ziegler et al. (2019); Stiennon et al. (2020); Ouyang et al. (2022)) via PPO (Schulman et al., 2017) and Direct Preference Optimization (DPO; Rafailov et al. (2023)) have become standard methods.
RLHF generally formulates alignment as reward maximization under a Kullback-Leibler (KL) penalty to a fixed reference model, while DPO reparameterizes preference learning into a pairwise loss that still anchors to the same reference.
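Both anchored objectives can be written out explicitly; these are the standard forms from the RLHF/DPO literature, with notation introduced here rather than taken from the text. RLHF maximizes reward under a KL anchor to the reference:

```latex
\max_{\pi_\theta}\; \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot\mid x)}\big[r(x,y)\big]
  \;-\; \beta\, D_{\mathrm{KL}}\big(\pi_\theta(\cdot\mid x)\,\big\|\,\pi_{\mathrm{ref}}(\cdot\mid x)\big)
```

while DPO folds the same anchor into a pairwise loss over preferred ($y_w$) and dispreferred ($y_l$) responses:

```latex
\mathcal{L}_{\mathrm{DPO}}(\theta)
  = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\Big[\log \sigma\Big(
      \beta \log \tfrac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)}
    - \beta \log \tfrac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}
    \Big)\Big].
```

In both, moving probability mass onto outputs to which $\pi_{\mathrm{ref}}$ assigns vanishing probability is heavily penalized, which is why the reference model's recall matters.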
Iterative Learning of Computable Phenotypes for Treatment Resistant Hypertension using Large Language Models
Aldeia, Guilherme Seidyo Imai, Herman, Daniel S., La Cava, William G.
Large language models (LLMs) have demonstrated remarkable capabilities for medical question answering and programming, but their potential for generating interpretable computable phenotypes (CPs) is under-explored. In this work, we investigate whether LLMs can generate accurate and concise CPs for six clinical phenotypes of varying complexity, which could be leveraged to enable scalable clinical decision support to improve care for patients with hypertension. In addition to evaluating zero-shot performance, we propose and test a synthesize, execute, debug, instruct strategy that uses LLMs to generate and iteratively refine CPs using data-driven feedback. Our results show that LLMs, coupled with iterative learning, can generate interpretable and reasonably accurate programs that approach the performance of state-of-the-art ML methods while requiring significantly fewer training examples.
- North America > United States > Pennsylvania > Philadelphia County > Philadelphia (0.14)
- South America > Brazil (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- (2 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
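The synthesize, execute, debug, instruct loop described in the abstract above can be sketched as follows. This is a hypothetical illustration of the control flow only: `refine_phenotype`, the `llm` callable, and the 0.95 stopping threshold are assumptions, not details from the paper.

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    records: list   # patient records (e.g., dicts of clinical variables)
    labels: list    # ground-truth phenotype labels

def accuracy(preds, labels):
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def refine_phenotype(llm, spec, data, target=0.95, max_rounds=5):
    """Synthesize-execute-debug-instruct loop: the LLM proposes a
    computable phenotype (here, a Python predicate), we run it on
    labeled records, and feed errors or metrics back as instructions."""
    program = llm("synthesize", spec)                       # synthesize
    for _ in range(max_rounds):
        try:
            preds = [program(r) for r in data.records]      # execute
        except Exception as err:
            program = llm("debug", str(err))                # debug
            continue
        score = accuracy(preds, data.labels)
        if score >= target:
            break
        program = llm("instruct", f"accuracy={score:.2f}")  # instruct
    return program
```

In practice the LLM would emit source code that is compiled and sandboxed before execution; the predicate form above just keeps the loop self-contained.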
Product vs. Process: Exploring EFL Students' Editing of AI-Generated Text for Expository Writing
Woo, David James, Yu, Yangyang, Guo, Kai, Huang, Yilin, Fung, April Ka Yeng
Text generated by artificial intelligence (AI) chatbots is increasingly used in English as a foreign language (EFL) writing contexts, yet its impact on students' expository writing process and compositions remains understudied. This research examines how EFL secondary students edit AI-generated text, exploring editing behaviors in their expository writing process and in their expository compositions, and the effect of those behaviors on human-rated scores for content, organization, language, and overall quality. Participants were 39 Hong Kong secondary students who wrote an expository composition with AI chatbots in a workshop. A convergent design was employed to analyze their screen recordings and compositions to examine students' editing behaviors and writing qualities. Analytical methods included qualitative coding, descriptive statistics, temporal sequence analysis, human-rated scoring, and multiple linear regression (MLR) analysis. We analyzed over 260 edits per dataset, and identified two editing patterns: one where students refined introductory units repeatedly before progressing, and another where they quickly shifted to extensive edits in body units (e.g., topic and supporting sentences). MLR analyses revealed that the number of AI-generated words positively predicted all score dimensions, while most editing variables showed minimal impact. These results suggest a disconnect between students' significant editing effort and improved composition quality, indicating AI supports but does not replace writing skills. The findings highlight the importance of genre-specific instruction and process-focused writing before AI integration. Educators should also develop assessments valuing both process and product to encourage critical engagement with AI text.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Education > Curriculum > Subject-Specific Education (1.00)
- Education > Educational Setting > K-12 Education > Secondary School (0.55)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.47)
Neural networks for the prediction of peel force for skin adhesive interface using FEM simulation
Masarkar, Ashish, Gupta, Rakesh, Dingari, Naga Neehar, Rai, Beena
Studying the peeling behaviour of adhesives on skin is vital for advancing biomedical applications such as medical adhesives and transdermal patches. Traditional methods like experimental testing and the finite element method (FEM), though considered gold standards, are resource-intensive, computationally expensive and time-consuming, particularly when analysing a wide material parameter space. In this study, we present a neural network-based approach to predict the minimum peel force (F_min) required for adhesive detachment from skin tissue, limiting the need for repeated FEM simulations and significantly reducing the computational cost. Leveraging a dataset generated from FEM simulations of a 90-degree peel test with varying adhesive and fracture mechanics parameters, our neural network model achieved high accuracy, validated through rigorous 5-fold cross-validation. The final architecture was able to predict a wide variety of skin-adhesive peeling behaviour, exhibiting a mean squared error (MSE) of 3.66×10⁻⁷ and an R² score of 0.94 on the test set, demonstrating robust performance. This work introduces a reliable, computationally efficient method for predicting adhesive behaviour, significantly reducing simulation time while maintaining accuracy. This integration of machine learning with high-fidelity biomechanical simulations enables efficient design and optimization of skin-adhesive systems, providing a scalable framework for future research in computational dermato-mechanics and bio-adhesive material design.
- Asia > India (0.04)
- North America > United States (0.04)
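The 5-fold cross-validation workflow behind the MSE and R² figures reported above can be sketched generically. The fitting functions below use a plain least-squares model as a stand-in for the paper's neural network, so the numbers it produces are illustrative only:

```python
import numpy as np

def five_fold_cv(X, y, fit, predict, k=5, seed=0):
    """k-fold cross-validation returning per-fold (MSE, R^2) pairs.
    `fit`/`predict` are pluggable; the paper's surrogate is an MLP
    trained on FEM-generated peel-test data, not the linear model below."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        pred = predict(model, X[test])
        mse = float(np.mean((pred - y[test]) ** 2))
        ss_res = np.sum((y[test] - pred) ** 2)
        ss_tot = np.sum((y[test] - y[test].mean()) ** 2)
        scores.append((mse, float(1.0 - ss_res / ss_tot)))
    return scores

# Linear least-squares stand-in for the surrogate model.
def fit_linear(X, y):
    Xb = np.c_[X, np.ones(len(X))]   # append bias column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict_linear(w, X):
    return np.c_[X, np.ones(len(X))] @ w
```

Averaging the per-fold MSE and R² then gives cross-validated estimates comparable to the headline figures.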
Approximating Language Model Training Data from Weights
Morris, John X., Yin, Junjie Oscar, Kim, Woojeong, Shmatikov, Vitaly, Rush, Alexander M.
Modern language models often have open weights but closed training data. We formalize the problem of data approximation from model weights and propose several baselines and metrics. We develop a gradient-based approach that selects the highest-matching data from a large public text corpus and show its effectiveness at recovering useful data given only the weights of the original and finetuned models. Even when none of the true training data is known, our method is able to locate a small subset of public Web documents that can be used to train a model to near the original model's performance, for models trained with both classification and supervised finetuning. On the AG News classification task, our method improves performance from 65% (using randomly selected data) to 80%, approaching the expert benchmark of 88%. When applied to a model trained with SFT on MSMARCO web documents, our method reduces perplexity from 3.3 to 2.3, compared to an expert LLAMA model's perplexity of 2.0.
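One plausible instantiation of such gradient-based selection is to score each candidate document by how well its training gradient aligns with the overall finetuned-minus-base weight change. This is an assumption for illustration, not necessarily the paper's exact procedure:

```python
import numpy as np

def rank_by_gradient_match(weight_delta, candidate_grads, top_k=2):
    """Score each candidate document by cosine similarity between its
    (flattened) training gradient and the finetuned-minus-base weight
    delta; return indices of the best matches plus all scores.
    A sketch under assumptions, not the paper's exact method."""
    d = np.asarray(weight_delta, dtype=float)
    d = d / np.linalg.norm(d)
    grads = np.asarray(candidate_grads, dtype=float)
    scores = [float(np.dot(g, d) / np.linalg.norm(g)) for g in grads]
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    return order[:top_k], scores
```

Documents whose gradients point in the same direction as the observed weight change are the ones most consistent with having been in the finetuning set.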
Predicting performance-related properties of refrigerant based on tailored small-molecule functional group contribution
Cao, Peilin, Geng, Ying, Feng, Nan, Zhang, Xiang, Qi, Zhiwen, Song, Zhen, Gani, Rafiqul
As current group contribution (GC) methods are mostly proposed for a wide size-range of molecules, applying them to property prediction of small refrigerant molecules can lead to unacceptable errors. Hence, for the design of novel refrigerants and refrigeration systems, it is of great interest to tailor GC-based models specifically to refrigerant molecules. In this work, databases of potential refrigerant molecules are first collected, focusing on five key properties related to the operational efficiency of refrigeration systems, namely normal boiling point, critical temperature, critical pressure, enthalpy of vaporization, and acentric factor. Based on tailored small-molecule groups, the GC method is combined with machine learning (ML) to model these performance-related properties. Following the development of GC-ML models, their performance is analyzed to highlight the potential group-to-property contributions. Additionally, the refrigerant property databases are extended internally and externally, based on which examples are presented to highlight the significance of the developed models.
- Information Technology > Artificial Intelligence > Machine Learning > Ensemble Learning (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
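At its core, a first-order GC model estimates a property as a base value plus a sum of per-group contributions; the GC-ML models described above replace or augment this linear form with fitted regressors. A minimal sketch, with invented group names and coefficients (not values from the paper):

```python
def predict_property(group_counts, contributions, base=0.0):
    """First-order group-contribution estimate: the property equals a
    base value plus each functional group's contribution times its
    count in the molecule. Coefficients here are illustrative only."""
    return base + sum(contributions[g] * n for g, n in group_counts.items())

# Hypothetical per-group contributions (units and magnitudes illustrative).
contribs = {"-CH3": 23.6, "-CH2-": 22.9, "-F": -10.0}
```

In a GC-ML pipeline, the group-count vector would instead be fed as features to an ML regressor, with the fitted model revealing the group-to-property contributions the abstract refers to.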
Trust in Disinformation Narratives: a Trust in the News Experiment
Song, Hanbyul, Silva, Miguel F. Santos, Suau, Jaume, Espinosa-Anke, Luis
Understanding why people trust or distrust one another, institutions, or information is a complex task that has led scholars from various fields of study to employ diverse epistemological and methodological approaches. Despite the challenges, it is generally agreed that the antecedents of trust (and distrust) encompass a multitude of emotional and cognitive factors, including a general disposition to trust and an assessment of trustworthiness factors. In an era marked by increasing political polarization, cultural backlash, widespread disinformation and fake news, and the use of AI software to produce news content, the need to study trust in the news has gained significant traction. This study presents the findings of a trust in the news experiment designed in collaboration with Spanish and UK journalists, fact-checkers, and the CardiffNLP Natural Language Processing research group. The purpose of this experiment, conducted in June 2023, was to examine the extent to which people trust a set of fake news articles based on previously identified disinformation narratives related to gender, climate change, and COVID-19. The online experiment participants (801 in Spain and 800 in the UK) were asked to read three fake news items and rate their level of trust on a scale from 1 (not true) to 8 (true). The pieces used a combination of factors, including stance (favourable, neutral, or against the narrative), presence of toxic expressions, clickbait titles, and sources of information to test which elements influenced people's responses the most. Half of the pieces were produced by humans and the other half by ChatGPT. The results show that the topic of news articles, stance, people's age, gender, and political ideologies significantly affected their levels of trust in the news, while the authorship (human or ChatGPT) did not have a significant impact.
- Europe > Spain (0.25)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Media > News (1.00)
- Health & Medicine > Therapeutic Area > Immunology (0.52)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (0.35)